Session B-1

Federated Learning 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 104

Adaptive Configuration for Heterogeneous Participants in Decentralized Federated Learning

Yunming Liao (University of Science and Technology of China, China); Yang Xu (University of Science and Technology of China & School of Computer Science and Technology, China); Hongli Xu and Lun Wang (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA)

Data generated at the network edge can be processed locally by leveraging the paradigm of edge computing (EC). Aided by EC, decentralized federated learning (DFL), which overcomes the single-point-of-failure problem of parameter server (PS) based federated learning, is becoming a practical and popular approach for machine learning over distributed data. However, DFL faces two critical challenges, i.e., system heterogeneity and statistical heterogeneity introduced by edge devices. To ensure fast convergence in the presence of slow edge devices, we present an efficient DFL method, termed FedHP, which integrates adaptive control of both local updating frequency and network topology to better support the heterogeneous participants. We establish a theoretical relationship between local updating frequency and network topology regarding model training performance and obtain a convergence upper bound. Building on this, we propose an optimization algorithm that adaptively determines local updating frequencies and constructs the network topology, so as to speed up convergence and improve model accuracy. Evaluation results show that the proposed FedHP can reduce the completion time by about 51% and improve model accuracy by at least 5% in heterogeneous scenarios, compared with the baselines.
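
A minimal sketch of the round structure described above, assuming models are flat numpy vectors; `grads_fn`, the frequency heuristic, and the mixing matrix are illustrative placeholders, not the bound-driven optimization from the paper:

```python
import numpy as np

def fedhp_round(models, grads_fn, freqs, mix_matrix, lr=0.1):
    """One DFL round: node i runs freqs[i] local SGD steps, then
    gossip-averages with neighbors via the doubly stochastic mixing matrix."""
    for i in range(len(models)):
        for _ in range(freqs[i]):
            models[i] = models[i] - lr * grads_fn(i, models[i])
    mixed = mix_matrix @ np.stack(models)   # x_i <- sum_j W[i, j] x_j
    return [mixed[i] for i in range(len(models))]

def freqs_for_speeds(step_times, time_budget):
    """Heuristic: give slower nodes fewer local steps so rounds stay aligned."""
    return [max(1, int(time_budget / t)) for t in step_times]
```
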
Speaker Yunming Liao

Yunming Liao received a B.S. degree in 2020 from the University of Science and Technology of China. He is currently pursuing his Ph.D. degree in the School of Computer Science and Technology, University of Science and Technology of China. His research interests include mobile edge computing and federated learning. 


Asynchronous Federated Unlearning

Ningxin Su and Baochun Li (University of Toronto, Canada)

Thanks to regulatory policies such as GDPR, it is essential to provide users with the right to erasure regarding their own data, even if such data has been used to train a model. Such a machine unlearning problem becomes more challenging in the context of federated learning, where clients collaborate to train a global model with their private data. When a client requests its data to be erased, its effects have already permeated through a large number of clients. All of these affected clients need to participate in the retraining process.

In this paper, we present the design and implementation of Knot, a new clustered aggregation mechanism custom-tailored to asynchronous federated learning. The design of Knot is based upon our intuition that client aggregation can be performed within each cluster only so that retraining due to data erasure can be limited to within each cluster as well. To optimize client-cluster assignment, we formulated a lexicographical minimization problem that could be transformed into a linear programming problem and solved efficiently. Over a variety of datasets and tasks, we have shown clear evidence that Knot outperformed the state-of-the-art federated unlearning mechanisms by up to 85% in the context of asynchronous federated learning.
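
A minimal sketch of the cluster-local aggregation idea, assuming numpy updates; the paper's lexicographic-minimization-based client-cluster assignment is replaced here by a given `assignment` array:

```python
import numpy as np

def knot_aggregate(updates, assignment, num_clusters):
    """Aggregate within each cluster only, so each cluster keeps its own model
    and data erasure never forces retraining outside the affected cluster."""
    return [np.mean([u for u, a in zip(updates, assignment) if a == c], axis=0)
            for c in range(num_clusters)]

def retrain_set(erased_client, assignment):
    """Unlearning a client touches only its cluster peers."""
    c = assignment[erased_client]
    return [i for i, a in enumerate(assignment) if a == c and i != erased_client]
```
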
Speaker Jointly Presented by Ningxin Su and Baochun Li (University of Toronto)

Ningxin Su is a third-year Ph.D. student in the Department of Electrical and Computer Engineering, University of Toronto, under the supervision of Prof. Baochun Li. She received her M.E. and B.E. degrees from the University of Sheffield and Beijing University of Posts and Telecommunications in 2020 and 2019, respectively. Her research area includes distributed machine learning, federated learning and networking. Her website is located at ningxinsu.github.io.

Baochun Li is currently a Professor at the Department of Electrical and Computer Engineering, University of Toronto. He is a Fellow of IEEE.


Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization

Heting Liu, Fang He and Guohong Cao (The Pennsylvania State University, USA)

Federated learning (FL) enables edge devices (clients) to learn a global model without sharing their local datasets: each client performs gradient descent on its local data and uploads the gradients to a central server to update the global model. However, FL incurs massive communication overhead resulting from the data uploaded in each training round. To address this issue, most existing research compresses the gradients with a fixed, uniform quantization level for all clients, which neither adapts the quantization to the varying gradient norms across rounds nor exploits the heterogeneity of the clients to accelerate FL. In this paper, we propose an adaptive and heterogeneous gradient quantization algorithm (AdaGQ) for FL to minimize the wall-clock training time: i) adaptive quantization exploits the change of the gradient norm to adjust the quantization resolution in each training round; ii) heterogeneous quantization assigns lower quantization resolution to slow clients, to align their training time with that of other clients and mitigate the communication bottleneck, and higher quantization resolution to fast clients, to achieve a better tradeoff between communication efficiency and accuracy. Experiments on various model architectures and datasets validate that AdaGQ reduces the total training time by up to 52.1% compared to baseline algorithms (e.g., FedAvg, QSGD).
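
A sketch of the two quantization ideas, using QSGD-style unbiased stochastic quantization; the norm-ratio rule below is an illustrative heuristic, not the schedule derived in the paper:

```python
import numpy as np

def stochastic_quantize(grad, levels):
    """QSGD-style unbiased quantization of grad onto `levels` magnitude levels."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad
    scaled = np.abs(grad) / norm * levels
    floor = np.floor(scaled)
    quantized = floor + (np.random.rand(*grad.shape) < (scaled - floor))
    return np.sign(grad) * norm * quantized / levels

def adaptive_levels(curr_norm, prev_norm, base=16, floor=2):
    """Adaptive resolution: fewer levels as the gradient norm decays. Slow
    clients would additionally get a lower `base` than fast clients."""
    return max(floor, int(base * min(curr_norm / max(prev_norm, 1e-12), 1.0)))
```
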
Speaker Heting Liu (The Pennsylvania State University)

Heting Liu has been a Ph.D. candidate at The Pennsylvania State University since 2017. Her research interests include edge computing, federated learning, cloud computing, and applied machine learning.


Toward Sustainable AI: Federated Learning Demand Response in Cloud-Edge Systems via Auctions

Fei Wang (Beijing University of Posts and Telecommunications, China); Lei Jiao (University of Oregon, USA); Konglin Zhu (Beijing University of Posts and Telecommunications, China); Xiaojun Lin (Purdue University, USA); Lei Li (Beijing University of Posts And Telecommunications, China)

Cloud-edge systems are important Emergency Demand Response (EDR) participants that help maintain power grid stability and demand-supply balance. However, as users are increasingly executing artificial intelligence (AI) workloads in cloud-edge systems, existing EDR management has not been designed for AI workloads and thus faces the critical challenges of the complex trade-offs between energy consumption and AI model accuracy, the trickiness of AI model quantization, the restriction of AI training deadlines, and the uncertainty of AI task arrivals. In this paper, targeting Federated Learning (FL), we design an auction-based approach to overcome all these challenges. We first formulate a non-linear mixed-integer program for long-term social welfare optimization. We then design a novel algorithmic approach that generates candidate training schedules, reformulates the original problem into a new schedule selection problem, and solves this new problem via an online primal-dual algorithm which embeds a careful payment design. We further rigorously prove that our approach achieves truthfulness and individual rationality, and leads to a constant competitive ratio for the long-term social welfare. Through extensive evaluations with real-world training data and system settings, we have validated the superior practical performance of our approach over multiple alternative methods.
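
The online primal-dual step can be illustrated with a knapsack-style sketch: accept a candidate schedule when its reported value covers the current resource price, charge that bid-independent threshold as payment (the standard route to truthfulness), and raise the price as capacity fills. All names and the exponential price curve below are assumptions for illustration, not the paper's mechanism:

```python
def online_primal_dual(arrivals, capacity, p_min=0.01, p_max=1.0):
    """arrivals: iterable of (reported_value, resource_demand) candidate schedules."""
    used, accepted, payments = 0.0, [], []
    for value, demand in arrivals:
        price = p_min * (p_max / p_min) ** (used / capacity)  # dual variable
        threshold = price * demand
        if value >= threshold and used + demand <= capacity:
            accepted.append((value, demand))
            payments.append(threshold)   # payment is independent of own bid
            used += demand
    return accepted, payments
```
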
Speaker Fei Wang (Beijing University of Posts and Telecommunications)

Fei Wang received the master's degree in Information and Communication Engineering from Harbin Engineering University, China, in 2021. He is currently working toward the Ph.D. degree in the School of Artificial Intelligence at Beijing University of Posts and Telecommunications. His research interests are in the areas of online learning and federated learning.


Session Chair

Giovanni Neglia

Session B-2

Federated Learning 2

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 104

Heterogeneity-Aware Federated Learning with Adaptive Client Selection and Gradient Compression

Zhida Jiang (University of Science and Technology of China, China); Yang Xu (University of Science and Technology of China & School of Computer Science and Technology, China); Hongli Xu and Zhiyuan Wang (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA)

Federated learning (FL) allows multiple clients to cooperatively train models without disclosing local data. However, existing works fail to jointly address three practical concerns in FL: limited communication resources, dynamic network conditions, and heterogeneous client properties, all of which slow down the convergence of FL. To tackle these challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset considering statistical heterogeneity and sends the global model to them. After local training, these selected clients upload compressed model updates matching their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. For the first time, we theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize the client selection and compression ratio decisions using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3× speedup compared to other methods.
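
A sketch of the greedy routine typically used for submodular maximization in client selection; `gain_fn` (e.g., a coverage measure over the clients' data distributions) is an assumed placeholder, and the paper additionally optimizes per-client compression ratios via linear programming:

```python
def greedy_client_selection(clients, k, gain_fn):
    """Greedily add the client with the largest marginal gain; for a monotone
    submodular gain_fn this achieves the classic (1 - 1/e) guarantee."""
    selected = set()
    for _ in range(k):
        remaining = [c for c in clients if c not in selected]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda c: gain_fn(selected | {c}) - gain_fn(selected))
        selected.add(best)
    return selected
```
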
Speaker Zhida Jiang

Zhida Jiang received the B.S. degree in 2019 from the Hefei University of Technology. He is currently a Ph.D. candidate in the School of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests include mobile edge computing and federated learning.


Federated Learning under Heterogeneous and Correlated Client Availability

Angelo Rodio (Inria, France); Francescomaria Faticanti (Inria, France); Othmane Marfoq (Inria, France & Accenture Technology Labs, France); Giovanni Neglia (Inria, France); Emilio Leonardi (Politecnico di Torino, Italy)

The enormous amount of data produced by mobile and IoT devices has motivated the development of federated learning (FL), a framework allowing such devices to collaboratively train machine learning models without sharing their local data. FL algorithms (like FedAvg) iteratively aggregate model updates computed by clients on their own datasets. Clients may exhibit different levels of participation, often correlated over time and with other clients. This paper presents the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability. Our analysis highlights how correlation adversely affects the algorithm's convergence rate and how the aggregation strategy can alleviate this effect at the cost of steering training toward a biased model. Guided by the theoretical analysis, we propose CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias. To this purpose, CA-Fed dynamically adapts the weight given to each client and may ignore clients with low availability and large correlation.
Our experimental results show that CA-Fed has higher time-average accuracy and a lower standard deviation than state-of-the-art AdaFed and F3AST.
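
A rough sketch of the aggregation rule's effect, assuming per-client availability and correlation estimates are already tracked; the thresholds and weights here are illustrative, whereas CA-Fed derives them from its convergence bound:

```python
import numpy as np

def ca_fed_aggregate(updates, avail, corr, avail_min=0.1, corr_max=0.8):
    """Exclude clients with low availability and high temporal correlation,
    then average the survivors with availability-proportional weights."""
    keep = [i for i in range(len(updates))
            if avail[i] >= avail_min and corr[i] <= corr_max]
    w = np.array([avail[i] for i in keep], dtype=float)
    w /= w.sum()
    return sum(wi * updates[i] for wi, i in zip(w, keep))
```
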
Speaker Angelo Rodio (Inria, France)

Angelo Rodio is a third-year Ph.D. student at Inria, France, under the supervision of Prof. Giovanni Neglia and Prof. Alain Jean-Marie. He received his B.E. and M.E. degrees from Politecnico di Bari, Italy, in 2018 and 2020, respectively. As part of a double diploma program, he also obtained his M.E. degree from Université Côte d'Azur, France, in 2020. His research area includes distributed machine learning, federated learning, and networking. His website can be found at https://www-sop.inria.fr/members/Angelo.Rodio.


Federated Learning with Flexible Control

Shiqiang Wang (IBM T. J. Watson Research Center, USA); Jake Perazzone (US Army Research Lab, USA); Mingyue Ji (University of Utah, USA); Kevin S Chan (US Army Research Laboratory, USA)

Federated learning (FL) enables distributed model training from local data collected by users. In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem. Existing works have separately considered different configurations to make FL more efficient, such as infrequent transmission of model updates, client subsampling, and compression of update vectors. However, an important open problem is how to jointly apply and tune these control knobs in a single FL algorithm, to achieve the best performance by allowing a high degree of freedom in control decisions. In this paper, we address this problem and propose FlexFL -- an FL algorithm with multiple options that can be adjusted flexibly. Our FlexFL algorithm allows both arbitrary rates of local computation at clients and arbitrary amounts of communication between clients and the server, making both the computation and communication resource consumption adjustable. We prove a convergence upper bound of this algorithm. Based on this result, we further propose a stochastic optimization formulation and algorithm to determine the control decisions that (approximately) minimize the convergence bound, while conforming to constraints related to resource consumption. The advantage of our approach is also verified using experiments.
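
A sketch of one round with three knobs exposed as a config: sampling fraction, per-client local step counts, and update sparsification. The names and the top-k compressor are assumptions; FlexFL chooses such knobs by (approximately) minimizing its convergence bound under resource constraints:

```python
import random
import numpy as np

def topk_sparsify(v, frac):
    """Keep only the largest-magnitude fraction of coordinates (communication knob)."""
    k = max(1, int(frac * v.size))
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def flexible_round(global_model, client_grad_fns, cfg, lr=0.05):
    n = max(1, int(cfg["sample_frac"] * len(client_grad_fns)))   # sampling knob
    sampled = random.sample(range(len(client_grad_fns)), n)
    deltas = []
    for i in sampled:
        local = global_model.copy()
        for _ in range(cfg["local_steps"][i]):                   # computation knob
            local -= lr * client_grad_fns[i](local)
        deltas.append(topk_sparsify(local - global_model, cfg["upload_frac"]))
    return global_model + np.mean(deltas, axis=0)
```
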
Speaker Shiqiang Wang (IBM T. J. Watson Research Center, USA)

Shiqiang Wang is a Staff Research Scientist at IBM T. J. Watson Research Center, NY, USA. He received his Ph.D. from Imperial College London, United Kingdom, in 2015. His current research focuses on the intersection of distributed computing, machine learning, networking, and optimization. He has made foundational contributions to edge computing and federated learning that generated both academic and industrial impact. He received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2021, IEEE ComSoc Best Young Professional Award in Industry in 2021, IBM Outstanding Technical Achievement Awards (OTAA) in 2019, 2021, and 2022, and multiple Invention Achievement Awards from IBM since 2016.


FedMoS: Taming Client Drift in Federated Learning with Double Momentum and Adaptive Selection

Xiong Wang and Yuxin Chen (Huazhong University of Science and Technology, China); Yuqing Li (Wuhan University, China); Xiaofei Liao and Hai Jin (Huazhong University of Science and Technology, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

Federated learning (FL) enables massive clients to collaboratively train a global model by aggregating their local updates without disclosing raw data. Communication has become one of the main bottlenecks that prolongs the training process, especially under large model variances due to skewed data distributions. Existing efforts mainly focus on either single-momentum-based gradient descent or random client selection for potential variance reduction, yet both often lead to poor model accuracy and system efficiency. In this paper, we propose FedMoS, a communication-efficient FL framework with coupled double momentum-based update and adaptive client selection, to jointly mitigate the intrinsic variance. Specifically, FedMoS maintains customized momentum buffers on both server and client sides, which track global and local update directions to alleviate the model discrepancy. Taking momentum results as input, we design an adaptive selection scheme to provide a proper client representation during FL aggregation. By optimally calibrating clients' selection probabilities, we can effectively reduce the sampling variance, while ensuring unbiased aggregation. Through a rigorous analysis, we show that FedMoS can attain the theoretically optimal O(T^{-2/3}) convergence rate. Extensive experiments using real-world datasets further validate the superiority of FedMoS, with 58%-87% communication reduction for achieving the same target performance compared to state-of-the-art techniques.
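
A simplified sketch of the double-momentum structure, assuming flat numpy parameters; the exact buffer coupling and the optimally calibrated selection probabilities in FedMoS differ from this illustration:

```python
import numpy as np

def client_update(model, grad_fn, buf, steps=5, beta=0.9, lr=0.1):
    """Client-side momentum buffer tracks the local update direction."""
    start = model.copy()
    for _ in range(steps):
        buf = beta * buf + (1 - beta) * grad_fn(model)
        model = model - lr * buf
    return model - start, buf          # delta sent to server, buffer kept locally

def server_update(global_model, deltas, server_buf, beta=0.9):
    """Server-side momentum buffer tracks the aggregated global direction."""
    server_buf = beta * server_buf + (1 - beta) * np.mean(deltas, axis=0)
    return global_model + server_buf, server_buf
```
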
Speaker Xiong Wang



Session Chair

Rui Zhang

Session B-3

Federated Learning 3

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 104

A Hierarchical Knowledge Transfer Framework for Heterogeneous Federated Learning

Yongheng Deng and Ju Ren (Tsinghua University, China); Cheng Tang and Feng Lyu (Central South University, China); Yang Liu and Yaoxue Zhang (Tsinghua University, China)

Federated learning (FL) enables distributed clients to collaboratively learn a shared model while keeping their raw data private. To mitigate system heterogeneity issues and overcome the resource constraints of clients, we investigate a novel paradigm in which heterogeneous clients learn uniquely designed models with different architectures, and transfer knowledge to the server to train a larger server model that in turn helps to enhance client models. To this end, we propose FedHKT, a Hierarchical Knowledge Transfer framework for FL. The main idea of FedHKT is to allow clients with similar data distributions to collaboratively learn to specialize in certain classes, and then aggregate the clients' specialized knowledge to train the server model. Specifically, we tailor a hybrid knowledge transfer mechanism for FedHKT, where model-parameter-based and knowledge distillation (KD) based methods are used for client-edge and edge-cloud knowledge transfer, respectively. Besides, to efficiently aggregate knowledge for conducive server model training, we propose a weighted ensemble distillation scheme with server-assisted knowledge selection, which aggregates knowledge by its prediction confidence, selects qualified knowledge during server model training, and uses the selected knowledge to help improve client models. Extensive experiments demonstrate the superiority of FedHKT compared to state-of-the-art baselines.
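
A small sketch of confidence-weighted knowledge aggregation for the distillation step; the server-assisted selection is reduced here to a max-softmax confidence weight, which is an assumed simplification:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def weighted_ensemble(client_logits):
    """Weight each client's soft labels by its prediction confidence (max
    softmax probability) and average them into a distillation target."""
    probs = np.stack([softmax(l) for l in client_logits])   # (clients, batch, classes)
    conf = probs.max(axis=-1)                               # (clients, batch)
    w = conf / conf.sum(axis=0, keepdims=True)              # normalize per sample
    return (w[..., None] * probs).sum(axis=0)               # (batch, classes)
```
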
Speaker Yongheng Deng (Tsinghua University)

Yongheng Deng received the B.S. degree from Nankai University, Tianjin, China, in 2019, and is currently pursuing the Ph.D. degree at the department of computer science and technology, Tsinghua University, Beijing, China. Her research interests include federated learning, edge intelligence, distributed system and mobile/edge computing.


Tackling System Induced Bias in Federated Learning: Stratification and Convergence Analysis

Ming Tang (Southern University of Science and Technology, China); Vincent W.S. Wong (University of British Columbia, Canada)

In federated learning, clients cooperatively train a global model by training local models over their datasets under the coordination of a central server. However, clients may sometimes be unavailable for training due to their network connections and energy levels. Given the highly non-independent and identically distributed (non-IID) nature of the clients' datasets, the local models of the available clients sampled for training may not represent those of all other clients. This is referred to as system-induced bias. In this work, we quantify the system-induced bias due to time-varying client availability. The theoretical result shows that this bias occurs independently of the number of available clients and the number of clients sampled in each training round. To address system-induced bias, we propose the FedSS algorithm, which incorporates stratified sampling, and prove that the proposed algorithm is unbiased. We quantify the impact of system parameters on the algorithm performance and derive the performance guarantee of our proposed FedSS algorithm. Theoretical and experimental results on the CIFAR-10 and MNIST datasets show that our proposed FedSS algorithm outperforms several benchmark algorithms by up to 5.1 times in terms of convergence rate.
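
A minimal sketch of the stratified-sampling step, assuming clients have already been grouped into strata (e.g., by data distribution); proportional allocation is one standard choice, while FedSS's exact allocation follows its analysis:

```python
import random

def stratified_sample(strata, budget):
    """Draw clients per stratum in proportion to stratum size, so the sampled
    set mirrors the population even when availability varies over time."""
    total = sum(len(s) for s in strata)
    chosen = []
    for s in strata:
        k = min(len(s), max(1, round(budget * len(s) / total)))
        chosen.extend(random.sample(s, k))
    return chosen
```
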
Speaker Ming Tang (Southern University of Science and Technology)

Ming Tang is an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology, Shenzhen, China. She received her Ph.D. degree from the Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, China, in Sep. 2018. She worked as a postdoctoral research fellow at The University of British Columbia, Vancouver, Canada, from Nov. 2018 to Jan. 2022. Her research interests include mobile edge computing, federated learning, and network economics. 


FedSDG-FS: Efficient and Secure Feature Selection for Vertical Federated Learning

Anran Li (Nanyang Technological University, Singapore); Hongyi Peng (Nanyang Technological University, Singapore & Alibaba Group, China); Lan Zhang and Jiahui Huang (University of Science and Technology of China, China); Qing Guo, Han Yu and Yang Liu (Nanyang Technological University, Singapore)

Vertical Federated Learning (VFL) enables multiple data owners, each holding a different subset of features about the same set of data samples, to jointly train a useful global model. Feature selection (FS) is important to VFL. It remains an open research problem, as existing FS works designed for VFL assume prior knowledge either of the number of noisy features or of the post-training threshold for selecting useful features, making them unsuitable for practical applications. To bridge this gap, we propose the Federated Stochastic Dual-Gate based Feature Selection (FedSDG-FS) approach. It consists of a Gaussian stochastic dual-gate to efficiently approximate the probability of a feature being selected, with privacy protection through Partially Homomorphic Encryption without a trusted third party. To reduce overhead, we propose a feature importance initialization method based on Gini impurity, which can accomplish its goals with only two parameter transmissions between the server and the clients. Extensive experiments on both synthetic and real-world datasets show that FedSDG-FS significantly outperforms existing approaches in terms of achieving more accurate selection of high-quality features as well as building global models with higher performance.
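
A sketch of the Gaussian stochastic gate idea (in the style of stochastic-gate feature selection) and an assumed Gini-based importance initialization; the dual-gate construction and the encrypted aggregation in the paper are not reproduced here:

```python
import numpy as np

def gate_sample(mu, sigma=0.5):
    """Reparameterized hard gate z = clip(mu + eps, 0, 1), eps ~ N(0, sigma^2);
    features with z == 0 are dropped, and gradients flow to mu through z."""
    eps = sigma * np.random.randn(*mu.shape)
    return np.clip(mu + eps, 0.0, 1.0)

def gini_importance(feature, labels, bins=10):
    """Assumed initialization: impurity reduction from bucketing one feature."""
    def gini(y):
        _, c = np.unique(y, return_counts=True)
        p = c / c.sum()
        return 1.0 - (p ** 2).sum()
    edges = np.quantile(feature, np.linspace(0, 1, bins + 1)[1:-1])
    buckets = np.digitize(feature, edges)
    child = sum((buckets == b).mean() * gini(labels[buckets == b])
                for b in np.unique(buckets))
    return gini(labels) - child
```
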
Speaker Anran Li (Nanyang Technological University)

Anran Li is currently a Research Fellow at Nanyang Technological University under the supervision of Prof. Yang Liu. She received her Ph.D. degree from the School of Computer Science and Technology, University of Science and Technology of China, under the supervision of Prof. Xiangyang Li and Prof. Lan Zhang.



Joint Participation Incentive and Network Pricing Design for Federated Learning

Ningning Ding (Northwestern University, USA); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

Federated learning protects users' data privacy through sharing users' local model parameters (instead of raw data) with a server. However, when massive users train a large machine learning model through federated learning, the dynamically varying and often heavy communication overhead can put significant pressure on the network operator. The operator may choose to dynamically change the network prices in response, which will eventually affect the payoffs of the server and users. This paper considers the under-explored yet important issue of the joint design of participation incentives (for encouraging users' contributions to federated learning) and network pricing (for managing network resources). Due to users' heterogeneous private information and multi-dimensional decisions, the optimization problems in Stage I of the multi-stage games are non-convex. Nevertheless, we are able to analytically derive the corresponding optimal contract and pricing mechanism through proper transformations of constraints, variables, and functions, under both vertical and horizontal interaction structures of the participants. We show that the vertical structure is better than the horizontal one, as it avoids misalignment of interests between the server and the network operator. Numerical results based on real-world datasets show that our proposed mechanisms decrease the server's cost by up to 24.87% compared with state-of-the-art benchmarks.
Speaker Ningning Ding (Northwestern University)

Ningning Ding received the B.S. degree in information engineering from Southeast University, Nanjing, China, in 2018, and the Ph.D. degree in information engineering from The Chinese University of Hong Kong in 2022. She is currently a Post-Doctoral Fellow with the Department of Electrical and Computer Engineering, Northwestern University, USA. Her primary research interests are in the interdisciplinary area between network economics and machine learning, with current emphasis on pricing and incentive mechanism design for federated learning, distributed coded machine learning, and IoT systems.


Session Chair

Jiangchuan Liu

